
A recent study has warned of the risks that accompany conversational artificial intelligence applications such as ChatGPT. Daniel Shank, a researcher at Missouri University of Science and Technology, stated that "the ability of artificial intelligence to present itself as a human and engage in long dialogues genuinely opens a new Pandora's box."
In the study, published in the journal Trends in Cognitive Sciences, Shank and his colleagues identified a "real threat": that "artificial intimacy" with AI applications could lead to a "breakdown" in human relationships. The research group noted that "after weeks and months of intensive conversation between a user and an AI platform, the latter can become a trusted companion that knows everything about the user and their interests."
Compounding the problem, AI applications are prone to so-called "hallucinations", the term researchers use for these programs' tendency to generate responses that are fabricated or disconnected from reality. This is a further reason for concern, as it means that "even brief conversations with artificial intelligence can be misleading."
The researchers warn: "If we come to think of artificial intelligence in this way, we begin to believe it is working in our interest, when in reality it can fabricate information or offer extraordinarily bad advice." They added that these applications "can harm people by encouraging deviant, unethical, and illegal behavior."
Last week, the AI company OpenAI announced improvements to the "memory" feature in ChatGPT, meaning the application will draw on previous interactions with the user when generating its responses, a change likely to deepen the sense of closeness between the human and the application.